The main challenge for fine-grained few-shot image classification is to learn feature representations with higher inter-class and lower intra-class variations from only a few labelled samples. Conventional few-shot learning methods, however, cannot be naively adopted for this fine-grained setting -- a quick pilot study reveals that they in fact push for the opposite (i.e., lower inter-class and higher intra-class variations). To alleviate this problem, prior works predominantly use a support set to reconstruct the query image and then utilize metric learning to determine its category. Upon careful inspection, we further reveal that such unidirectional reconstruction methods only help to increase inter-class variations and are not effective in tackling intra-class variations. In this paper, we for the first time introduce a bi-reconstruction mechanism that can simultaneously accommodate inter-class and intra-class variations. In addition to using the support set to reconstruct the query set to increase inter-class variations, we further use the query set to reconstruct the support set to reduce intra-class variations. This design effectively helps the model explore more subtle and discriminative features, which is key for the fine-grained problem at hand. Furthermore, we also construct a self-reconstruction module that works alongside the bi-directional module to make the features even more discriminative. Experimental results on three widely used fine-grained image classification datasets consistently show considerable improvements over other methods. Code is available at: https://github.com/PRIS-CV/Bi-FRN.
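As a rough illustration of the mechanism, the sketch below reconstructs query features from the support pool (and vice versa) with closed-form ridge regression, a common choice in feature-reconstruction few-shot methods; the feature shapes, the λ value, and the symmetric scoring are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def ridge_reconstruct(target, pool, lam=0.1):
    """Reconstruct each row of `target` as a linear combination of
    rows of `pool` via closed-form ridge regression."""
    # W = T P^T (P P^T + lam I)^{-1}, reconstruction = W P
    gram = pool @ pool.T + lam * np.eye(pool.shape[0])
    weights = target @ pool.T @ np.linalg.inv(gram)
    return weights @ pool

rng = np.random.default_rng(0)
support = rng.normal(size=(25, 64))   # support features of one class
query = rng.normal(size=(9, 64))      # query-image features

# support -> query reconstruction (targets inter-class variation)
query_hat = ridge_reconstruct(query, support)
# query -> support reconstruction (targets intra-class variation)
support_hat = ridge_reconstruct(support, query)

# symmetric reconstruction error as a (negated) class score
score = -(np.mean((query - query_hat) ** 2) +
          np.mean((support - support_hat) ** 2))
```

The class with the smallest symmetric reconstruction error would be predicted; in practice both directions operate on learned deep feature maps rather than raw vectors.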
The intelligent drug delivery cart is an advanced smart drug delivery device. Compared with traditional manual drug delivery, it offers higher delivery efficiency and a lower error rate. In this project, an intelligent drug delivery cart was designed and built that recognizes road routes and target ward room numbers through visual recognition technology. The cart selects the corresponding path according to the identified room number, delivers the medicine accurately to the target ward, and returns to the pharmacy after delivery. The cart uses a DC power supply, and a motor driver module controls two DC motors, which overcomes the problem of excessive turning-angle deviation. The cart's line-following function uses closed-loop control to improve line-tracking accuracy and the controllability of the cart's speed. Ward-number recognition is performed by a camera module with a microcontroller, with functions such as adaptive adjustment to ambient brightness, distortion correction, and automatic calibration. A Bluetooth module enables communication between two cooperating delivery carts, achieving efficient and accurate communication and interaction. Experiments show that the intelligent drug delivery cart can accurately recognize room numbers and plan routes to deliver medicine to far, middle, and near wards, with fast and accurate judgment. Moreover, two carts can cooperate to deliver medicine to the same ward with high efficiency and good coordination.
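The closed-loop line-following described above is commonly realized with a PID controller on the line's lateral offset; below is a minimal sketch assuming a differential-drive cart, with placeholder gains and sample time (the project's actual controller and parameters are not specified).

```python
class PID:
    """Minimal discrete PID controller for line-following steering."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.05, kd=0.01, dt=0.02)  # placeholder gains
base_speed = 0.5
# error: lateral offset of the line from the sensor center
steer = pid.update(error=0.1)
left_motor = base_speed - steer   # differential-drive mixing
right_motor = base_speed + steer
```

In a real cart the motor commands would additionally be clamped to the driver's valid range, and `error` would come from the line sensor or camera at each control tick.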
Compositional zero-shot learning (CZSL) aims to recognize novel compositions using knowledge learned from the attribute-object compositions in the training set. Prior works mainly project images and compositions into a common embedding space to measure their compatibility scores. However, both attributes and objects share the visual representations learned in this way, leading the model to exploit spurious correlations and to be biased toward seen pairs. Instead, we reconsider CZSL as an out-of-distribution generalization problem. If objects are treated as domains, we can learn object-invariant features to recognize attributes attached to any object; likewise, attribute-invariant features can be learned when recognizing objects, with attributes as domains. Specifically, we propose an invariant feature learning framework that aligns different domains at both the representation and gradient levels to capture the intrinsic features relevant to the task. Experiments on two CZSL benchmarks show that the proposed method significantly outperforms previous state-of-the-art approaches.
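A minimal sketch of what representation-level domain alignment can look like: pull each domain's (e.g., each object's) mean feature toward a common center, so that features used to recognize an attribute become object-invariant. The loss below is a simplified stand-in for illustration, not the paper's actual objective.

```python
import numpy as np

def representation_alignment_loss(domain_feats):
    """Penalize each domain's mean feature for drifting from the
    overall center, pulling the domains' representations together."""
    means = [f.mean(axis=0) for f in domain_feats]
    center = np.mean(means, axis=0)
    return float(np.mean([np.sum((m - center) ** 2) for m in means]))

rng = np.random.default_rng(0)
# features of the same attribute observed on three objects (domains)
domains = [rng.normal(loc=i, size=(16, 32)) for i in range(3)]
align_loss = representation_alignment_loss(domains)
```

Such a term would be added to the classification loss during training; the gradient-level alignment mentioned in the abstract would additionally constrain per-domain gradients, which is omitted here.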
The 2019 coronavirus brought serious challenges to global social stability and public health. One effective way to contain the epidemic is to require people to wear masks in public places and to monitor mask-wearing status with suitable automatic detectors. However, existing deep-learning-based models struggle to simultaneously meet the requirements of high accuracy and real-time performance. To address this problem, we propose an improved lightweight face-mask detector based on YOLOv5, which achieves a good balance between precision and speed. First, a novel backbone combining the ShuffleNetV2 network with a coordinate attention mechanism is proposed. Then, an efficient path-aggregation network, BiFPN, is applied as the feature-fusion neck. Furthermore, in the model-training stage, the localization loss is replaced with α-CIoU to obtain higher-quality anchors. Several valuable strategies, such as data augmentation, adaptive image scaling, and anchor-clustering operations, are also utilized. Experimental results on the AIZOO face-mask dataset show the superiority of the proposed model. Compared with the original YOLOv5, the proposed model improves inference speed by 28.3% while still increasing precision by 0.58%. Compared with seven other existing models, it achieves the best mean average precision of 95.2%, which is 4.4% higher than the baseline.
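For intuition on the α-powered localization loss, the sketch below shows only the plain α-IoU part; the extra center-distance and aspect-ratio penalties that CIoU adds are omitted, and the α value is a placeholder.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def alpha_iou_loss(pred, target, alpha=3.0):
    # Powering the IoU term up-weights high-IoU (high-quality) boxes;
    # alpha = 1 recovers the ordinary IoU loss.
    return 1.0 - iou(pred, target) ** alpha

loss = alpha_iou_loss((0, 0, 2, 2), (1, 1, 3, 3))
```

Here two unit-overlap boxes give IoU = 1/7, and cubing the IoU makes the loss much steeper for poorly localized predictions than the ordinary IoU loss would be.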
Despite great strides in fine-grained visual classification (FGVC), current methods still rely on a fully-supervised paradigm that calls for ample expert labels. Semi-supervised learning (SSL) techniques, which acquire knowledge from unlabeled data, offer a considerable alternative and show great promise for coarse-grained problems. However, existing SSL paradigms mostly assume in-distribution (i.e., category-aligned) unlabeled data, which hinders their effectiveness when re-purposed for FGVC. In this paper, we propose a novel design specifically tailored to out-of-distribution data for semi-supervised FGVC. We work with the important assumption that all fine-grained categories naturally follow a hierarchical structure (e.g., the phylogenetic tree of "Aves" covering all bird species). Accordingly, instead of operating on individual samples, we can predict sample relations within this tree structure as the optimization goal of SSL. Beyond that, we further introduce two strategies uniquely brought about by these tree structures: consistency regularization and reliable pseudo-relations. Our experimental results reveal that (i) the proposed method yields good robustness against out-of-distribution data, and (ii) it can be equipped with prior arts, boosting their performance and yielding state-of-the-art results. Code is available at https://github.com/pris-cv/relmatch.
As powerful as fine-grained visual classification (FGVC) is, responding to your query with a bird name such as "Whip-poor-will" or "Mallard" probably does not mean much to you. This, however commonly accepted in the literature, highlights a fundamental question at the interface between AI and humans: what constitutes transferable knowledge for humans to learn from AI? This paper aims to answer this question using FGVC as a test bed. Specifically, we envisage a scenario where a trained FGVC model (an AI expert) serves as a knowledge provider to help average people (you and me) become domain experts ourselves, i.e., people able to tell a "Whip-poor-will" from a "Mallard". Fig. 1 lays out our approach to answering this question. Assuming an AI expert trained with expert human labels, we ask (i) what is the best transferable knowledge we can extract from the AI, and (ii) what is the most practical means of measuring the gains in expertise given that knowledge? For the former, we propose to represent knowledge as highly discriminative visual regions that are exclusive to experts. To this end, we devise a multi-stage learning framework that starts by modeling the visual attention of domain experts and novices, then discriminatively distills their differences to obtain expert-exclusive knowledge. For the latter, we frame the evaluation process as a book guide, so as to best fit the learning practices of human habits. A comprehensive human study of 15,000 trials shows that our method consistently improves people of divergent bird expertise at recognizing once-unidentifiable birds. Interestingly, our approach also leads to conventional FGVC performance gains when the extracted knowledge is used as a means of achieving discriminative localization. Code is available at: https://github.com/pris-cv/making-a-bird-ai-expert-work-for-you-and-me
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation in image content, which complicates distortion patterns across different scales and aggravates the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression task in line with the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
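The easy-to-hard progression can be pictured as a schedule that gradually shifts loss weight from an easier auxiliary task toward the harder quality-score regression; the scheduling function below is a hypothetical illustration, not the PMT module's actual design.

```python
def progressive_weights(epoch, total_epochs):
    """Linearly shift loss weight from an easier auxiliary task (e.g.,
    coarse quality classification) to the harder score regression."""
    p = min(1.0, epoch / total_epochs)
    return {"classification": 1.0 - p, "regression": p}

# early training leans on the easy task, late training on the hard one
early, late = progressive_weights(1, 30), progressive_weights(29, 30)
```

The per-epoch weights would multiply the corresponding task losses before backpropagation, so the model is eased into the regression objective rather than facing it from the start.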
It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the test performance of the original dense models, but sometimes even slightly boost generalization. Theoretical understanding of such experimental observations is yet to be developed. This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, it considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned at initialization according to different rates. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance; more surprisingly, the generalization bound improves as the pruning fraction grows. To complement this positive result, the work further shows a negative result: there exists a large pruning fraction such that, while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, leading to the performance drop of the pruned neural network. To the best of our knowledge, this is the \textbf{first} generalization result for pruned neural networks, suggesting that pruning can improve a neural network's generalization.
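The random pruning-at-initialization setup analyzed above can be sketched in a few lines: draw a fixed binary mask at a given rate, zero the corresponding weights, and keep them zero during training by masking gradients. The layer sizes are placeholders; the theoretical analysis itself lives in the two-layer setting described in the abstract.

```python
import numpy as np

def prune_at_init(weights, fraction, rng):
    """Randomly zero out `fraction` of the weights at initialization
    and return the pruned weights plus the fixed binary mask."""
    mask = rng.random(weights.shape) >= fraction
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(scale=1.0, size=(512, 784))  # first-layer weights
w_pruned, mask = prune_at_init(w, fraction=0.5, rng=rng)

# during training, gradients are masked so pruned weights stay zero
grad = rng.normal(size=w.shape)
masked_grad = grad * mask
```

The mask is drawn once and then frozen, which is exactly what makes the pruning fraction a fixed property of the dynamics that the paper's threshold result can be stated about.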
Time-series anomaly detection is an important task and has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining considerable amounts of labels in a low-cost way, enabling customers to label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain it is hard for people to write reasonable labeling functions, as time-series data is numerically continuous and difficult to interpret. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection through only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically from only a few labeled data points. All of these techniques are complementary and reinforce each other. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions in both the weak supervision and active learning areas. The system has also been tested in a real industrial scenario, demonstrating its practicality.
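As a toy example of what heuristic labeling functions for time-series anomalies might look like, the rules and thresholds below are made up for illustration and combined by a simple majority vote; the actual system generates labeling functions automatically and combines them with a learned label model.

```python
# each labeling function votes +1 (anomaly), -1 (normal), or 0 (abstain)
def lf_spike(window):
    """Sudden jump of the last point relative to the recent mean."""
    return 1 if abs(window[-1] - sum(window[:-1]) / (len(window) - 1)) > 3.0 else 0

def lf_flatline(window):
    """Sensor stuck at a constant value."""
    return 1 if max(window) - min(window) < 1e-6 else 0

def lf_in_range(window):
    """Values well inside the normal operating range."""
    return -1 if all(10.0 <= v <= 20.0 for v in window) else 0

def majority_vote(window, lfs):
    score = sum(lf(window) for lf in lfs)
    return 1 if score > 0 else (-1 if score < 0 else 0)

lfs = [lf_spike, lf_flatline, lf_in_range]
label = majority_vote([15.0, 15.1, 14.9, 15.0, 42.0], lfs)  # spike -> anomaly
```

The abstain option is what makes such rules cheap to write: each function only fires where its heuristic is confident, and the combiner resolves conflicts across functions.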
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities carry multiple modalities such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise in the modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource, but also has a limited parameter count, competitive speed, and good interpretability. Our code will be available soon.
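Instance-level fusion, as opposed to a single KG-level weighting, can be pictured as per-entity softmax coefficients over the modality embeddings. In MEAformer these coefficients come from a transformer; the scalar confidences below are hypothetical stand-ins used only to show the weighting mechanics.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features, confidences):
    """Weight each modality embedding of one entity by a per-entity
    coefficient before fusion, instead of one KG-level weight."""
    w = softmax(confidences)  # instance-level fusion weights
    return sum(wi * f for wi, f in zip(w, features)), w

rng = np.random.default_rng(0)
img, rel, attr = (rng.normal(size=64) for _ in range(3))
# a low confidence for an unidentifiable image down-weights that modality
fused, weights = fuse_modalities([img, rel, attr], np.array([-2.0, 1.0, 1.0]))
```

Because the weights are computed per entity, an entity with a noisy image can lean on its relational and attribute embeddings while another entity in the same KG still exploits its image.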